Optimal function approximation with ReLU neural networks

Authors

Abstract

In this paper, we consider the optimal approximation of univariate functions with feed-forward ReLU neural networks. We attempt to answer the following questions: for a given function and network, what is the minimal possible approximation error? How fast does this error decrease with network size? Is it attainable by current training techniques? Theoretically, we introduce necessary and sufficient conditions for the optimal approximation of convex functions. We give lower and upper bounds on the optimal approximation errors, and an approximation rate that measures how the error decreases with network size. Network architectures are presented that generate the optimal approximations. We then propose an algorithm to compute the optimal approximations and prove its convergence. We conduct experiments to validate its effectiveness and compare it with other approaches. We also demonstrate that the theoretical limit of approximation errors is not attained by networks trained with stochastic gradient descent optimization, which indicates that the expressive power of such networks has not been exploited to its full potential.
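As a rough illustration of the setting (our sketch, not the paper's algorithm), a one-hidden-layer ReLU network with n hidden units realizes a continuous piecewise-linear function with at most n breakpoints, so approximating a convex function amounts to choosing those breakpoints well. The snippet below, with an illustrative test function f(x) = x² and naive equally-spaced knots, shows how the uniform error decreases with the number of pieces; all names are hypothetical.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Illustrative sketch: a one-hidden-layer ReLU network that realizes the
# piecewise-linear interpolant of a function f at equally spaced knots on
# [a, b]. This is a naive baseline, not the optimal approximation (or the
# algorithm) studied in the paper.
def pwl_relu_net(f, n, a=0.0, b=1.0):
    t = np.linspace(a, b, n + 1)            # knots t_0 < ... < t_n
    y = f(t)                                # values at the knots
    slopes = np.diff(y) / np.diff(t)        # slope of each linear piece
    # The interpolant equals y_0 + sum_k c_k * relu(x - t_k), where c_k is
    # the change of slope at knot t_k (and c_0 is the initial slope).
    c = np.concatenate(([slopes[0]], np.diff(slopes)))
    def net(x):
        x = np.asarray(x, dtype=float)
        return y[0] + relu(x[..., None] - t[:-1]) @ c
    return net

f = lambda x: x ** 2                        # a convex test function
xs = np.linspace(0.0, 1.0, 10_001)
for n in (4, 8, 16, 32):
    err = np.max(np.abs(pwl_relu_net(f, n)(xs) - f(xs)))
    print(f"{n:2d} pieces -> uniform error ~ {err:.2e}")
```

For f(x) = x², the error of this equally-spaced choice shrinks roughly by a factor of four each time the number of pieces doubles, consistent with the O(1/n²) behavior of piecewise-linear interpolation of smooth convex functions.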


Similar articles

Function approximation with ReLU-like zonal function networks

A zonal function (ZF) network on the q-dimensional sphere S^q is a network of the form x ↦ ∑_{k=1}^{n} a_k φ(x · x_k), where φ : [−1, 1] → ℝ is the activation function, x_k ∈ S^q are the centers, and a_k ∈ ℝ. While the approximation properties of such networks are well studied in the context of positive definite activation functions, recent interest in deep and shallow networks motivates the study of activation...
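To make the ZF form concrete, here is a minimal sketch (our illustration; the random centers, coefficients, and the ReLU-like choice φ(t) = max(t, 0) are assumptions, not taken from the article) that evaluates x ↦ ∑_k a_k φ(x · x_k) for points on the sphere S²:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_sphere_points(n, dim=3):
    # Draw n points uniformly on the unit sphere S^{dim-1}.
    v = rng.standard_normal((n, dim))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

phi = lambda t: np.maximum(t, 0.0)   # a ReLU-like activation on [-1, 1]

n = 16
centers = random_sphere_points(n)    # centers x_k in S^2
coeffs = rng.standard_normal(n)      # coefficients a_k in R

def zf_network(x):
    # x: (m, 3) array of unit vectors; returns the (m,) vector of
    # network outputs sum_k a_k * phi(x . x_k).
    return phi(x @ centers.T) @ coeffs

print(zf_network(random_sphere_points(5)))
```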


Optimal approximation of continuous functions by very deep ReLU networks

We prove that deep ReLU neural networks with conventional fully-connected architectures with W weights can approximate continuous ν-variate functions f with uniform error not exceeding a_ν ω_f(c_ν W^{−2/ν}), where ω_f is the modulus of continuity of f and a_ν, c_ν are some ν-dependent constants. This bound is tight. Our construction is inherently deep and nonlinear: the obtained approximation rate cann...
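As a worked instance of the bound (our illustration, not taken from the abstract): if f is L-Lipschitz, then ω_f(t) ≤ Lt and the uniform error is at most a_ν c_ν L W^{−2/ν}. For a univariate Lipschitz function (ν = 1) this is O(W^{−2}), whereas a piecewise-linear interpolant realized by a one-hidden-layer network with W weights only achieves O(W^{−1}).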


Nonparametric regression using deep neural networks with ReLU activation function

Consider the multivariate nonparametric regression model. It is shown that estimators based on sparsely connected deep neural networks with ReLU activation function and properly chosen network architecture achieve the minimax rates of convergence (up to log n-factors) under a general composition assumption on the regression function. The framework includes many well-studied structural constrain...


Optimal approximation of piecewise smooth functions using deep ReLU neural networks

We study the necessary and sufficient complexity of ReLU neural networks—in terms of depth and number of weights—which is required for approximating classifier functions in an L^2-sense. As a model class, we consider the set E^β(ℝ^d) of possibly discontinuous piecewise C^β functions f : [−1/2, 1/2]^d → ℝ, where the different “smooth regions” of f are separated by C^β hypersurfaces. For given dimension d ≥ ...


Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations

This article concerns the expressive power of depth in neural nets with ReLU activations and bounded width. We are particularly interested in the following questions: what is the minimal width w_min(d) so that ReLU nets of width w_min(d) (and arbitrary depth) can approximate any continuous function on the unit cube [0, 1]^d arbitrarily well? For ReLU nets near this minimal width, what can one say ...



Journal

Journal title: Neurocomputing

Year: 2021

ISSN: 0925-2312, 1872-8286

DOI: https://doi.org/10.1016/j.neucom.2021.01.007